Disney advert banned for showing 'disturbing' severed body

BBC News

A menacing Disney advert featuring a severed body has been banned by the advertising regulator, which said it was likely to frighten and cause distress to children. The Advertising Standards Authority (ASA) found the entertainment giant had broken its rules with its advert for the Predator Badlands film. Parents complained that the digital poster, which featured a large alien holding aloft the severed body of a smaller, human figure, was inappropriate and disturbing for young children. Disney said the severed body was actually that of a robot, and that the fact it had been cut in two further emphasised its non-human nature. The advert, which was seen on the roadside in Giffnock, Glasgow, was promoting the Disney sci-fi film ahead of its release in November.


Roblox's AI-Powered Age Verification Is a Complete Mess

WIRED

Kids are being identified as adults--and vice versa--on Roblox, while age-verified accounts are already being sold online. Just days after launching, Roblox's much-hyped AI-powered age verification system is a complete mess. Roblox's face scanning system, which estimates people's ages before they can access the platform's chat functions, rolled out in the US and other countries around the world last week, after initially launching in a few locations in December. Roblox says it is implementing the system to allow users to safely chat with users of similar ages. But players are already in revolt because they can no longer chat with their friends, developers are demanding Roblox roll back the update, and crucially, experts say that not only is the AI mis-aging young players as adults and vice versa, but the system also does little to address the problem it was designed to tackle: the flood of predators using the platform to groom young children.


Do Persona-Infused LLMs Affect Performance in a Strategic Reasoning Game?

Licato, John, Steinle, Stephen, Hollis, Brayden

arXiv.org Artificial Intelligence

Although persona prompting in large language models appears to trigger different styles of generated text, it is unclear whether these translate into measurable behavioral differences, much less whether they affect decision-making in an adversarial strategic environment, which we release as open source. We investigate the impact of persona prompting on strategic performance in PERIL, a world-domination board game. Specifically, we compare the effectiveness of persona-derived heuristic strategies to those chosen manually. Our findings reveal that certain personas associated with strategic thinking improve game performance, but only when a mediator is used to translate personas into heuristic values. We introduce this mediator as a structured translation process, inspired by exploratory factor analysis, that maps LLM-generated inventory responses into heuristics. Results indicate our method enhances heuristic reliability and face validity compared to directly inferred heuristics, allowing us to better study the effect of persona types on decision making. These insights advance our understanding of how persona prompting influences LLM-based decision-making and propose a heuristic generation method that applies psychometric principles to LLMs.


Prompting Science Report 4: Playing Pretend: Expert Personas Don't Improve Factual Accuracy

Basil, Savir, Shapiro, Ina, Shapiro, Dan, Mollick, Ethan, Mollick, Lilach, Meincke, Lennart

arXiv.org Artificial Intelligence

This is the fourth in a series of short reports that help business, education, and policy leaders understand the technical details of working with AI through rigorous testing. Here, we ask whether assigning personas to models improves performance on difficult objective multiple-choice questions. We study both domain-specific expert personas and low-knowledge personas, evaluating six models on GPQA Diamond (Rein et al. 2024) and MMLU-Pro (Wang et al. 2024), graduate-level questions spanning science, engineering, and law. We tested three approaches: In-Domain Experts: Assigning the model an expert persona ("you are a physics expert") matched to the problem type (physics problems) had no significant impact on performance (with the exception of the Gemini 2.0 Flash model). Off-Domain Experts (Domain-Mismatched): Assigning the model an expert persona ("you are a physics expert") not matched to the problem type (law problems) resulted in marginal differences. Low-Knowledge Personas: We assigned the model negative capability personas (layperson, young child, toddler), which were generally harmful to benchmark accuracy. Across both benchmarks, persona prompts generally did not improve accuracy relative to a no-persona baseline. Expert personas showed no consistent benefit across models, with few exceptions.


Young children's anthropomorphism of an AI chatbot: Brain activation and the role of parent co-presence

Kim, Pilyoung, Chin, Jenna H., Xie, Yun, Brady, Nolan, Yeh, Tom, Yang, Sujin

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) chatbots powered by a large language model (LLM) are entering young children's learning and play, yet little is known about how young children construe these agents or how such construals relate to engagement. We examined anthropomorphism of a social AI chatbot during collaborative storytelling and asked how children's attributions related to their behavior and prefrontal activation. Children at ages 5-6 (N = 23) completed three storytelling sessions: interacting with (1) an AI chatbot only, (2) a parent only, and (3) the AI and a parent together. After the sessions, children completed an interview assessing anthropomorphism toward both the AI chatbot and the parent. Behavioral engagement was indexed by the conversational turn count (CTC) ratio, and concurrent fNIRS measured oxygenated hemoglobin in bilateral vmPFC and dmPFC regions. Children reported higher anthropomorphism for parents than for the AI chatbot overall, although AI ratings were relatively high for perceptive abilities and epistemic states. Anthropomorphism was not associated with CTC. In the right dmPFC, higher perceptive scores were associated with greater activation during the AI-only condition and with lower activation during the AI+Parent condition. Exploratory analyses indicated that higher dmPFC activation during the AI-only condition correlated with higher end-of-session "scared" mood ratings. Findings suggest that stronger perceptive anthropomorphism can be associated with greater brain activation related to interpreting the AI's mental states, whereas parent co-presence may help some children interpret and regulate novel AI interactions. These results may have design implications for encouraging parent-AI co-use in early childhood.


He Hunted Alleged Groomers on Roblox. Then the Company Banned Him

WIRED

YouTuber "Schlep" built a huge following tracking down alleged child predators on Roblox before being kicked off. The platform is facing multiple lawsuits over child safety. Last month, Kentucky attorney general Russell Coleman announced the details of yet another lawsuit against Roblox over suspected pedophiles lurking on the hugely popular gaming platform. While doing so, Coleman singled out the work of one self-described "predator hunter" who claims to have helped identify alleged abusers mixing with young gamers. "Roblox is even trying to silence those who raised these security risks," Coleman said. "The famous case of one of their developers, Schlep, immediately comes to mind." Schlep is in fact Michael, a 22-year-old Texan who has spent the last two years working with a group of other Roblox players to track down and identify people purportedly seeking to groom young children on the platform--predators like the one Schlep says groomed him a decade ago, which he says led ...


Zohran Annoyed a Lot of New York Public School Parents With This One. But He's Got a Point.

Slate

The many ways we've tried to identify gifted 4-year-olds, and how they've failed. When I was a kindergartner in the 1980s, the "gifted" programming for my class could be found inside of a chest. I don't know what toys and learning materials lived there, since I wasn't one of the handful of presumably more academically advanced kiddos that my kindergarten teacher invited to open the chest. My distinct impression at the time was that my teacher didn't think I was worthy of the enrichment because I frequently spilled my chocolate milk at lunch and I had also once forgotten to hang a sheet of paper on the class easel--instead painting an elaborate and detailed picture on the stand itself. The withering look on my teacher's face after seeing the easel assured me that gifted I was not.


Behold, the pumpkin king: A 2,346 pound gourd

Popular Science

Brandon Dawson's prize-winning pumpkin weighs as much as a bison. Breakthroughs, discoveries, and DIY tips sent every weekday. After narrowly missing the title last year, electric vehicle engineer Brandon Dawson won the top prize at the Safeway World Championship Pumpkin Weigh-Off in Half Moon Bay, California. His humongous gourd weighed a staggering 2,346 pounds. The annual pumpkin weighing contest has been likened to the Super Bowl of pumpkin growing.


Signs of dyslexia and reading troubles can be spotted in kindergarten -- or even preschool

Los Angeles Times

Vanessa Silver, who tutors young children with dyslexia, works with Liina Yerro, 9, in Granada Hills. California is to begin universal screening of kindergarten through second-grade students for reading difficulties, including dyslexia.


'My son genuinely believed it was real': Parents are letting little kids play with AI. Are they wrong?

The Guardian

Some believe AI can spark their child's imagination through personalized stories and generative images. Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his "super loquacious" four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed. "He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone," recalled Josh, who lives in north-west Ohio. "I thought he would finish the story and the phone would turn off."